Okay, hello everyone. Before we start, I have some news for you. The first thing is that this Friday at 8:30, Professor Meyer offers another Q&A session. It will again be in the same time slot, Friday 8:30, on Teams. He also said that throughout the semester he would switch slots occasionally for people who cannot attend at this time. As always, if you have questions regarding the lecture or the exercise, you can also contact us directly, for example in case you don't have time for this Q&A session.
Another thing I would like to tell you is that Professor Meyer is creating blog posts with the transcripts of his videos. So if you prefer reading, you can go through the link that I just gave you, or look it up on StudOn; each video will have its own blog post with the same content as the video. If you prefer reading, for example when preparing for the exam, you can go through the slides or read his blog posts instead of watching the videos again.
Yeah, just to give you this info. For today's exercise, I will mostly write on the screen, which means that I cannot always see the chat window. So when you have a question, I think the best thing is to interrupt me right away and ask it. I will also try, at the end of each task that we solve, to switch to the chat window and answer questions from the chat. But it is usually a lot better if, when you discover a mistake in my calculation, you just say "isn't this a mistake?" and interrupt me directly. So you're welcome to speak whenever you want to.
So before we start, are there any questions regarding the general organization? Hey, do you have a question? No, I don't, I just wanted to greet you. That's nice. People, if they want to, let them see, I think for you at least, right? Yeah. But I will start screen sharing right now, which means that you should now be able to see my screen. Can you see my screen? And for the stuff that I'm writing on the screen, you should see some fancy lines. Yeah.
So what I'm trying to do is this: we will have mostly blank slides here, and I will solve the exercises during our live session. If you want to, you can take notes and also try to derive all the steps here by hand. It's possible that my handwriting is bad; for that case, after my handwritten solution there should also be a printed solution in the slides, so you can resort to the printed version if you cannot read my writing. But it's usually best if you just interrupt me whenever there's a problem reading something I've written. Okay, a short recap
on the last exercise session: there we started with our pattern recognition topic, that is, with the first classifier that we will learn this semester. For the exam, it's important to keep in mind that every classifier we learn here is good for something: you should know roughly what the classifier does, in which setting it's the best one, and what its possible disadvantages are. I'll try to do this again for the Bayesian classifier, because we will also base today's exercise on the Bayesian classifier. In principle, the Bayes classifier is very simple from a conceptual standpoint.
It works if you have all the knowledge in the world, because it basically says: when you want to classify your sample X, we try to get the probability of Y given X, and this notation means that X is known at this moment. Let's say our X is this vector here. Our Bayesian classifier would basically do the following: it would calculate the posterior probabilities for all our classes. Y is what we have to predict, and maybe we have only two possibilities for Y; for example, Y could be zero or Y could be one. So to do our classification, we would calculate the probability for both possibilities: once the probability for Y equals zero given our feature vector, and once the probability for Y equals one given the same feature vector. (I cannot see this spot right now because Zoom is showing a video window there, but I can still write.) If we were almighty, we would know all the probabilities and could, for example, calculate this and get, say, 80% for this probability for that class.